Parity Space-Based Fault Detection by Minimum Error Minimax Probability Machine
Authors
Abstract
Similar Resources
The Minimum Error Minimax Probability Machine
We construct a distribution-free Bayes optimal classifier called the Minimum Error Minimax Probability Machine (MEMPM) in a worst-case setting, i.e., under all possible choices of class-conditional densities with a given mean and covariance matrix. By assuming no specific distributions for the data, our model is thus distinguished from traditional Bayes optimal approaches, where an assumption o...
Feature Selection Based on Minimum Error Minimax Probability Machine
Feature selection is an important task in pattern recognition. Support Vector Machine (SVM) and Minimax Probability Machine (MPM) have been successfully used as the classification framework for feature selection. However, these paradigms cannot automatically control the balance between prediction accuracy and the number of selected features. In addition, the selected feature subsets are a...
Robust Minimax Probability Machine Regression
We formulate regression as maximizing the minimum probability (Ω) that the true regression function is within ±ε of the regression model. Our framework starts by posing regression as a binary classification problem, such that a solution to this single classification problem directly solves the original regression problem. Minimax probability machine classification (Lanckriet et al., 2002a) is u...
Transductive Minimax Probability Machine
The Minimax Probability Machine (MPM) is an elegant machine learning algorithm for inductive learning. It learns a classifier that minimizes an upper bound on its own generalization error. In this paper, we extend its celebrated inductive formulation to an equally elegant transductive learning algorithm. In the transductive setting, the label assignment of a test set is already optimized during...
Minimax Probability Machine
When constructing a classifier, the probability of correct classification of future data points should be maximized. In the current paper this desideratum is translated in a very direct way into an optimization problem, which is solved using methods from convex optimization. We also show how to exploit Mercer kernels in this setting to obtain nonlinear decision boundaries. A worst-case bound on...
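As a rough illustration of the optimization described in this abstract (a sketch, not the authors' implementation), the linear MPM can be posed as minimizing the sum of the two class standard deviations along a projection direction, subject to a normalization on the separation of the class means; the worst-case correct-classification bound is then κ²/(1 + κ²). The synthetic data, the SciPy-based solver, and all variable names below are assumptions for demonstration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Two synthetic Gaussian classes (illustration only; MPM itself is distribution-free)
X1 = rng.normal([2.0, 2.0], 1.0, size=(200, 2))
X2 = rng.normal([-2.0, -2.0], 1.0, size=(200, 2))
mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
S1, S2 = np.cov(X1.T), np.cov(X2.T)

# Linear MPM (Lanckriet et al. formulation, up to scaling):
#   minimize  sqrt(a' S1 a) + sqrt(a' S2 a)   s.t.   a'(mu1 - mu2) = 1
def objective(a):
    return np.sqrt(a @ S1 @ a) + np.sqrt(a @ S2 @ a)

constraint = {"type": "eq", "fun": lambda a: a @ (mu1 - mu2) - 1.0}
res = minimize(objective, x0=np.ones(2), constraints=[constraint])
a = res.x

# kappa = 1 / optimal objective; worst-case bound on P(correct) = kappa^2 / (1 + kappa^2)
kappa = 1.0 / res.fun
worst_case_accuracy = kappa**2 / (1.0 + kappa**2)

# Decision threshold: classify x as class 1 when a'x >= b
b = a @ mu1 - kappa * np.sqrt(a @ S1 @ a)
print(f"worst-case accuracy bound: {worst_case_accuracy:.3f}")
```

On this well-separated synthetic data the bound comes out high (close to 0.9), and the rule `a @ x >= b` classifies almost all training points correctly; a kernelized version, as the abstract notes, follows by applying the same program in a Mercer-kernel feature space.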
Journal
Journal title: IFAC-PapersOnLine
Year: 2018
ISSN: 2405-8963
DOI: 10.1016/j.ifacol.2018.09.568